Real-time modelling to inform the response to outbreaks

Sebastian Funk

https://epiforecasts.io

Acknowledgements

Former and current members of the epiforecasts group (https://epiforecasts.io):
Akira Endo, Alexis Robert, Ciara McCarthy, Emil Iftekhar,
Friederike Becker, Hannah Choi, Hugo Gruson, James Azam,
James Munday, Joel Hellewell, Joseph Palmer, Kaitlyn Johnson,
Kath Sherratt, Liza Hadley, Manuel Stapper, Nikos Bosse,
Robin Thompson, Sam Abbott, Sophie Meakin, Toshiaki Asakura.

Collaborators at LSHTM and elsewhere.

Models are a tool to combine data (what we know) with assumptions and theory (what we think) to learn about what we don’t know.

When data is abundant, models and analytics can generate insight without many additional assumptions.

When data is sparse (e.g. early in an outbreak), modellers need to make more assumptions to generate insights.
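
A toy illustration (hypothetical numbers, not from the talk) of how an assumption bridges sparse data: with only a short case series, an assumed fixed generation time is enough to turn the observed exponential growth rate into a rough reproduction number, via the relation R = exp(r × G).

    # Sparse early case data plus an assumed generation time (R code)
    cases <- c(2, 3, 5, 9, 14, 23, 38)               # hypothetical daily counts
    r <- coef(lm(log(cases) ~ seq_along(cases)))[2]  # fitted exponential growth rate
    G <- 5                                           # assumed generation time (days)
    exp(r * G)                                       # rough reproduction number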

January 2020: Can COVID-19 be controlled by contact tracing?

Hellewell et al., Lancet Glob Health, 2020

Probability of control depends on intensity of transmission and contact tracing effort.

Hellewell et al., Lancet Glob Health, 2020
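
A minimal sketch in R of the kind of branching process behind such analyses; this is not the Hellewell et al. model, and all parameter values (R0, overdispersion k, tracing probability) are illustrative assumptions. Traced secondary cases are assumed to be isolated before they transmit onward.

    # Branching process with contact tracing (illustrative sketch)
    simulate_outbreak <- function(R0 = 2.5, k = 0.16, p_traced = 0.6,
                                  initial = 5, max_cases = 5000) {
      total <- active <- initial
      while (active > 0 && total < max_cases) {
        offspring <- sum(rnbinom(active, size = k, mu = R0))  # overdispersed spread
        traced <- rbinom(1, offspring, p_traced)              # isolated early
        active <- offspring - traced                          # only untraced transmit
        total <- total + offspring
      }
      total < max_cases  # TRUE if the outbreak died out (controlled)
    }
    set.seed(1)
    mean(replicate(1000, simulate_outbreak()))  # estimated probability of control

Sweeping p_traced and R0 in this sketch reproduces the qualitative picture on the slide: control depends on both transmission intensity and tracing effort.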

“We illustrate the potential impact that flawed model inferences can have on public health policy with the model described […] by Joel Hellewell and colleagues, which is part of the scientific evidence informing the UK Government’s response to COVID-19.”

Gurdasani & Ziauddeen, Lancet Glob Health, 2020

“All models are wrong, but some are useful”

George Box

“All models are wrong, but some are useful”

  • wrong: how wrong?
  • some: which ones?

How wrong are models?

Evaluation of predictive modelling

Forecasts vs. Scenarios

Forecasts are unconditional predictions of what will happen; scenarios are projections conditional on stated assumptions.

Image from: https://covid19scenariomodelinghub.org/

Evaluation of forecasts

Assess model quality by how closely predictions match reality
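
One standard approach, used by the COVID-19 forecast hubs and implemented in the epiforecasts scoringutils R package, scores quantile forecasts with interval scores. A self-contained sketch with made-up numbers:

    # Interval score for a central (1 - alpha) prediction interval
    interval_score <- function(y, lower, upper, alpha) {
      (upper - lower) +                    # narrow intervals score better ...
        2 / alpha * pmax(lower - y, 0) +   # ... but observations below ...
        2 / alpha * pmax(y - upper, 0)     # ... or above the interval are penalised
    }
    interval_score(y = 120, lower = 80, upper = 110, alpha = 0.1)  # missed 90% PI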

Initial short-term forecasting efforts faced multiple challenges

1-week ahead forecasts produced by SPI-M for SAGE in the UK.

Funk et al., medRxiv, 2020

Forecast hubs supported systematic collection of forecasts

Reich et al., Am J Public Health, 2022

Median ensemble outperformed individual models

Sherratt et al., eLife, 2023
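
A sketch of how such an ensemble can be built: for each target and quantile level, take the median of the member models' predicted values. The forecasts below are hypothetical.

    # Quantile-wise median ensemble of three hypothetical models
    forecasts <- rbind(
      model_a = c(q25 = 90, q50 = 110, q75 = 140),
      model_b = c(q25 = 70, q50 = 100, q75 = 120),
      model_c = c(q25 = 95, q50 = 130, q75 = 180)
    )
    apply(forecasts, 2, median)  # ensemble quantiles: 90, 110, 140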

Case forecasts were poorly calibrated a few weeks from the forecast date

Sherratt et al., eLife, 2023
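
Calibration can be checked by comparing nominal and empirical interval coverage: in a well-calibrated forecast, about 90% of observations fall inside the 90% prediction intervals. A sketch with hypothetical intervals and observations:

    # Empirical coverage of 90% prediction intervals
    lower <- c(40, 55, 80, 100, 130)
    upper <- c(90, 120, 160, 190, 240)
    obs   <- c(85, 140, 150, 210, 200)
    mean(obs >= lower & obs <= upper)  # 0.6, well below 0.9: intervals too narrow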

Unpredictable changes in human behaviour made forecasting harder

Gimma et al., PLOS Medicine, 2022

Measurements of behaviour changes did not improve predictions

Observed behaviour as a predictor improved forecasts, but only once age-specific reporting was taken into account.

Munday et al., PLOS Comp Biol, 2023
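
One way contact measurements can enter such forecasts (a sketch, not the Munday et al. pipeline): scale an observed age-stratified contact matrix into a next-generation matrix, whose dominant eigenvalue tracks the reproduction number. All numbers below are hypothetical.

    # Reproduction number from a contact matrix via a next-generation matrix
    contacts <- matrix(c(8, 3,   # mean daily contacts of children (with children, adults)
                         3, 5),  # mean daily contacts of adults (with children, adults)
                       nrow = 2, byrow = TRUE)
    susceptibility <- c(0.5, 1.0)         # assumed relative susceptibility by age
    q <- 0.05                             # assumed per-contact transmission probability
    ngm <- q * susceptibility * contacts  # recycling scales row i by susceptibility[i]
    max(Re(eigen(ngm)$values))            # dominant eigenvalue ~ reproduction number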

Variants as predictor improved forecasts during transitions

https://github.com/epiforecasts/forecast.vocs
https://github.com/jbracher/branching_process_delta
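
The intuition for why variant fractions help during transitions (a sketch with hypothetical parameters, not the code in these repositories): total incidence is a sum of strains growing at different rates, so the aggregate growth rate shifts as the new variant takes over, and the variant fraction reveals the change before it dominates the totals.

    # Two strains with different growth rates; aggregate growth shifts over time
    t <- 0:60
    wild    <- 1000 * exp(-0.02 * t)  # declining resident strain
    variant <- 10 * exp(0.08 * t)     # invading variant with a growth advantage
    total <- wild + variant
    range(diff(log(total)))           # daily growth moves from about -0.02 to 0.08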

Humans were better than models at predicting cases, but not deaths

Bosse et al., PLOS Comp Biol, 2022

No clear pattern of which type of model performed best

Sherratt et al., in progress

Not all modelling is forecasting; can we evaluate other models?

Any model of the future is a prediction and can be evaluated as such.

Howerton et al., Nat Comm, 2023

Which models are useful?

Utility could be independent of predictive ability

Saltelli, 2018

Evaluation of utility to policy makers

Assess utility by asking recipients of modelling advice?

Meltzer et al., MMWR Weekly Report, 2014

Frieden and Damon, Emerg Inf Dis, 2015

Not all utility is visible

“The Rt estimates generated by the EpiForecasts team were used extensively by the WHO COVID-19 Analytics unit. They formed a core part of two routine analysis pipelines […] regularly presented to the incident management structure at WHO headquarters, including senior management, as well as regional and national WHO offices.”

Abbott & Funk, 2022

Utility as public health impact

  • Modelling can save lives, but it can also do harm. How do we tell one from the other?
  • To assess the public health impact of modelling, we need to qualify and quantify it.

Evaluation of the process

Sherratt et al., Wellcome Open Res, 2024

Kucharski et al., PLOS Biology, 2020

Ongoing activity in this space

Scoping review on evaluation of modelling work with Johanna Hanefeld, Emil Iftekhar, Julia Fitzner, Duaa Rao, Aleena Tanveer.

Summary / discussion points

  • Initial work during COVID-19 was strongly driven by the models and data that happened to be available. Future work can be facilitated by R packages and similar tools that make methods widely available.
  • More work is needed to determine which data and methods for forecasting best support public health.
  • Any predictive modelling can be evaluated, and we need to develop appropriate methods for doing so.
  • Evaluation of modelling work can relate to model correctness, process, impact, etc.
  • Correctness does not always mean greater impact (although there are good reasons to aim for quality and correctness).
  • Evaluating utility to decision makers is not the same as evaluating the public health impact of modelling.

Thank you

Slides at

https://epiforecasts.io/slides/basel_20250121.html